Lower bounds for approximation by MLP neural networks

Authors

  • Vitaly Maiorov
  • Allan Pinkus
Abstract

The degree of approximation by a single hidden layer MLP model with n units in the hidden layer is bounded below by the degree of approximation by a linear combination of n ridge functions. We prove that there exists an analytic, strictly monotone, sigmoidal activation function for which this lower bound is essentially attained. We also prove, using this same activation function, that one can approximate arbitrarily well any continuous function on any compact domain by a two hidden layer MLP using a fixed finite number of units in each layer.
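For orientation, the objects compared in the abstract can be written out explicitly. The following is a minimal sketch in standard notation; the symbols M_n, R_n, and E(f; X) are labels of ours, not quoted from the paper:

  % single hidden layer MLP with n hidden units and activation \sigma
  \[ \mathcal{M}_n(\sigma) = \Bigl\{ \sum_{i=1}^{n} c_i\,\sigma(w_i \cdot x + b_i) : c_i, b_i \in \mathbb{R},\ w_i \in \mathbb{R}^d \Bigr\} \]

  % linear combinations of n ridge functions, each g_i univariate
  \[ \mathcal{R}_n = \Bigl\{ \sum_{i=1}^{n} g_i(a_i \cdot x) : a_i \in \mathbb{R}^d,\ g_i \in C(\mathbb{R}) \Bigr\} \]

  % every MLP unit is itself a ridge function (take a_i = w_i and
  % g_i(t) = c_i \sigma(t + b_i)), so M_n(\sigma) is contained in R_n, hence
  \[ E\bigl(f; \mathcal{M}_n(\sigma)\bigr) \;\ge\; E\bigl(f; \mathcal{R}_n\bigr), \qquad E(f; X) := \inf_{g \in X} \|f - g\|. \]

The inclusion makes the lower bound immediate; the substance of the paper is that for a suitable sigmoidal activation this bound is essentially sharp.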


Similar articles

Approximation theory of the MLP

In this survey we discuss various approximation-theoretic problems that arise in the multilayer feedforward perceptron (MLP) model in neural networks. The MLP model is one of the more popular and practical of the many neural network models. Mathematically it is also one of the simpler models. Nonetheless the mathematics of this model is not well understood, and many of these problems are approx...


Neural Network Performance Analysis for Real Time Hand Gesture Tracking Based on Hu Moment and Hybrid Features

This paper presents a comparison study between multilayer perceptron (MLP) and radial basis function (RBF) neural networks, with supervised learning and the back-propagation algorithm, for tracking hand gestures. Both networks have two output classes, hand and face. Skin is detected by a region-based algorithm in the image, and the networks are then applied to video sequences frame by frame in...


Minimum mean square estimation and neural networks

Neural networks for estimation, such as the multilayer perceptron (MLP) and functional link net (FLN), are shown to approximate the minimum mean square estimator rather than the maximum likelihood estimator or others. Cramer-Rao maximum a posteriori lower bounds on estimation error can therefore be used to approximately bound network training error, when a statistical signal model is available ...
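As background for this claim (standard estimation theory, not specific to the paper): the minimum mean square estimator of a quantity x from an observation y is the conditional mean,

  \[ \hat{x}_{\mathrm{MMSE}}(y) = \mathbb{E}[x \mid y] = \arg\min_{\hat{x}(\cdot)} \mathbb{E}\bigl[\|x - \hat{x}(y)\|^2\bigr], \]

so a network trained to minimize mean squared error over samples (y, x) is, at its optimum, fitting this conditional expectation rather than a maximum likelihood estimate.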


Training an MLP neural network for image compression using the GSA method

Image compression is one of the important research fields in image processing. Up to now, various methods have been presented for image compression. The neural network is one of these methods and has shown good performance in many applications. The usual method for training neural networks is error back-propagation, whose drawbacks are slow convergence and stopping at points of lo...
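To make the idea concrete, here is a toy sketch of training a small MLP's weights with a population-based global search instead of back-propagation. It is our own illustration: the update rule below is a crude stand-in for GSA's gravitational attraction, not the paper's actual algorithm, and all names and constants are assumptions.

  import numpy as np

  rng = np.random.default_rng(0)

  def mlp_forward(w, x):
      # unpack a flat 17-parameter vector into a 2-4-1 network
      W1 = w[:8].reshape(2, 4)
      b1 = w[8:12]
      W2 = w[12:16].reshape(4, 1)
      b2 = w[16]
      h = np.tanh(x @ W1 + b1)
      return h @ W2 + b2

  def mse(w, X, y):
      return float(np.mean((mlp_forward(w, X).ravel() - y) ** 2))

  # toy regression data
  X = rng.uniform(-1.0, 1.0, size=(64, 2))
  y = np.sin(X[:, 0]) + 0.5 * X[:, 1]

  # population of candidate weight vectors inside a bounded search box
  pop = rng.uniform(-2.0, 2.0, size=(30, 17))
  for step in range(200):
      fitness = np.array([mse(w, X, y) for w in pop])
      best = pop[fitness.argmin()].copy()
      # pull candidates toward the current best plus exploration noise;
      # a crude stand-in for GSA's mass-weighted gravitational attraction
      pop += 0.3 * (best - pop) + rng.normal(0.0, 0.1, size=pop.shape)
      pop = np.clip(pop, -2.0, 2.0)  # stay inside the search box

  print("best MSE found:", mse(best, X, y))

Because the search uses only error evaluations and no gradients, it is not tied to a single descent trajectory the way plain back-propagation is, which is the motivation the blurb alludes to.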


Bounding the Search Space for Global Optimization of Neural Networks Learning Error: An Interval Analysis Approach

Training a multilayer perceptron (MLP) with algorithms employing global search strategies has been an important research direction in the field of neural networks. Despite a number of significant results, an important matter concerning the bounds of the search region (typically defined as a box) where a global optimization method has to search for a potential global minimizer seems to be unresol...



Journal:
  • Neurocomputing

Volume 25, Issue 

Pages  -

Publication date 1999